A Simple Sequential Algorithm for Approximating Bayesian Inference
Authors
Abstract
People can make surprisingly sophisticated inductive inferences, despite constraints on cognitive resources that make exact Bayesian inference computationally intractable. What algorithms could they be using to make this possible? We show that a simple sequential algorithm, Win-Stay, Lose-Shift (WSLS), can be used to approximate Bayesian inference and is consistent with human behavior on a causal learning task. This algorithm provides a new way to understand people's judgments and a new, efficient method for performing Bayesian inference.
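The win-stay, lose-sample idea can be sketched for a discrete hypothesis space with deterministic (all-or-none) likelihoods: keep the current hypothesis while it explains each new observation, and resample from the posterior over surviving hypotheses when it fails. This is a minimal illustrative sketch, not the paper's implementation; the function name, the `consistent` predicate, and the interval-hypothesis example below are assumptions chosen for concreteness.

```python
import random

def wsl_sample(hypotheses, prior, consistent, data, rng=random):
    """Win-Stay, Lose-Sample over a discrete hypothesis space.

    hypotheses: list of candidate hypotheses
    prior:      dict mapping each hypothesis to its prior weight
    consistent: predicate consistent(h, d) -> bool (deterministic likelihood)
    data:       sequence of observations, processed one at a time
    Returns the sequence of hypotheses held after each observation.
    """
    # Start from a sample from the prior.
    h = rng.choices(hypotheses, weights=[prior[x] for x in hypotheses])[0]
    seen, trace = [], []
    for d in data:
        seen.append(d)
        if not consistent(h, d):
            # "Lose": the current hypothesis is ruled out, so resample
            # from the posterior, i.e. the prior restricted to hypotheses
            # consistent with all data observed so far.
            survivors = [x for x in hypotheses
                         if all(consistent(x, e) for e in seen)]
            h = rng.choices(survivors, weights=[prior[x] for x in survivors])[0]
        # "Win": otherwise stay with the current hypothesis.
        trace.append(h)
    return trace

# Hypothetical example: hypotheses are intervals, data are points that the
# true interval must contain.
hyps = [(0, 10), (0, 5), (5, 10)]
prior = {h: 1 / 3 for h in hyps}
inside = lambda h, d: h[0] <= d <= h[1]
trace = wsl_sample(hyps, prior, inside, [3, 7])
```

After observing both 3 and 7, only the interval (0, 10) remains consistent, so the learner necessarily ends there regardless of its random starting hypothesis; with deterministic likelihoods, each step's hypothesis is a sample from the current posterior.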
Similar references
Win-Stay, Lose-Sample: a simple sequential algorithm for approximating Bayesian inference.
People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian infere...
Technical Report PARG-10-01: Sampling for Bayesian Quadrature
We propose a novel form of sequential Monte Carlo integration that emerges from a decision-theoretic treatment of approximation. Quadrature of any kind requires a set of samples of the integrand. Bayesian quadrature [O'Hagan, 1991, Rasmussen and Ghahramani, 2003] employs those samples within a Gaussian process framework to perform inference about unobserved regions of the space, and hence about ...
Inference for a class of partially observed point process models
This paper presents a simulation-based framework for sequential inference from partially and discretely observed point process models with static parameters. Taking on a Bayesian perspective for the static parameters, we build upon sequential Monte Carlo methods, investigating the problems of performing sequential filtering and smoothing in complex examples, where current methods often fail. We...
A simple general formula for tail probabilities for frequentist and Bayesian inference
We describe a simple general formula for approximating the p-value for testing a scalar parameter in the presence of nuisance parameters. The formula covers both frequentist and Bayesian contexts and does not require explicit nuisance parameterisation. Implementation is discussed in terms of computer algebra packages. Examples are given and the relationship to Barndorff-Nielsen's approx...
Approximating Posterior Distributions in Belief Networks Using Mixtures
Exact inference in densely connected Bayesian networks is computationally intractable, and so there is considerable interest in developing effective approximation schemes. One approach which has been adopted is to bound the log likelihood using a mean-field approximating distribution. While this leads to a tractable algorithm, the mean field distribution is assumed to be factorial and hence uni...